15 research outputs found

    A Case for Rebel, a DSL for product specifications

    International audience. No abstract available.

    Solving the bank with Rebel: on the design of the Rebel specification language and its application inside a bank

    Large organizations like banks suffer from the ever-growing complexity of their systems. Evolving the software becomes harder and harder, since a single change can affect a much larger part of the system than predicted upfront. A large contributing factor to this problem is that the actual domain knowledge is often implicit, incomplete, or out of date, making it difficult to reason about the correct behavior of the system as a whole. With Rebel we aim to capture and centralize the domain knowledge and relate it to the running systems. Rebel is a formal specification language for controlling the intrinsic complexity of software for financial enterprise systems. In collaboration with ING, a large Dutch bank, we developed the Rebel specification language and an Integrated Specification Environment (ISE), currently offering automated simulation and checking of Rebel specifications using a Satisfiability Modulo Theories (SMT) solver. In this paper we report on our design choices for Rebel, the implementation and features of the ISE, and our initial observations on the application of Rebel inside the bank.
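
    The abstract does not reproduce Rebel's concrete syntax. As a rough illustration of the idea it describes, checking a specification's transitions with an SMT solver, the sketch below uses Z3's Python API to verify that a hypothetical 'withdraw' transition of a toy account state machine preserves a non-negative balance. The event name, fields, and guard are invented for this example and are not taken from Rebel.

```python
# Hedged sketch (not actual Rebel): encode one transition of a toy
# "Account" state machine and ask an SMT solver whether it can ever
# violate the invariant balance >= 0.
from z3 import Int, Solver, sat

balance      = Int("balance")        # pre-state balance
amount       = Int("amount")         # withdrawn amount
balance_next = Int("balance_next")   # post-state balance

s = Solver()
s.add(balance >= 0)                        # invariant holds before the step
s.add(amount > 0, amount <= balance)       # guard of the hypothetical 'withdraw' event
s.add(balance_next == balance - amount)    # effect of the transition
s.add(balance_next < 0)                    # look for a post-state breaking the invariant

if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("'withdraw' preserves balance >= 0")
```

    Because the guard already bounds the withdrawn amount by the balance, the check is unsatisfiable; dropping the guard makes it satisfiable and yields a concrete one-step counterexample.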

    Constraint-based run-time state migration for live modeling

    Live modeling enables modelers to incrementally update models as they are running and get immediate feedback about the impact of their changes. Changes introduced in a model may trigger inconsistencies between the model and its run-time state (e.g., deleting the current state in a state machine), effectively requiring the run-time state to be migrated so that it complies with the updated model. In this paper, we introduce an approach that enables such run-time state to be migrated automatically, based on declarative constraints defined by the language designer. We illustrate the approach using Nextep, a meta-modeling language for defining invariants and migration constraints on run-time state models. When a model changes, Nextep employs model finding techniques, backed by a solver, to automatically infer a new run-time model that satisfies the declared constraints. We apply Nextep to define migration strategies for two DSLs, and report on its expressiveness and performance.
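
    Nextep's own notation is not shown here. The following sketch only approximates the model-finding step the abstract describes: after a model edit deletes the state a running instance was in, a solver is asked for a new run-time state that still exists in the updated model and satisfies a declared constraint. The state encoding, the 'progress' field, and the invariant are assumptions made for this illustration, not Nextep code.

```python
# Hedged sketch (not Nextep itself): after a model edit deletes the
# state the running instance was in, use a solver to find a new
# run-time state that satisfies the declared migration constraints.
from z3 import Int, Solver, Or, Implies, sat

# States of the updated model, encoded as integers (illustrative);
# the state the instance was in, 'PAUSED', was deleted by the edit.
IDLE, RUNNING, DONE = 0, 1, 2

current  = Int("current")    # new run-time state to be found
progress = Int("progress")   # run-time data carried over unchanged

s = Solver()
s.add(progress == 40)                                            # existing run-time value
s.add(Or(current == IDLE, current == RUNNING, current == DONE))  # must be a surviving state
s.add(Implies(current == DONE, progress >= 100))                 # declared invariant (invented)

if s.check() == sat:
    print("migrated run-time state:", s.model()[current])
```

    Any model the solver returns is a run-time state that is consistent with the edited model, which is the essence of constraint-based migration as summarized above.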

    ICST 2021 version of Rebel2

    This version contains the information presented in the ICST'21 'Modeling with Mocking' paper.

    Modeling with mocking

    Writing formal specifications often requires users to abstract from the original problem, especially when verification techniques such as model checking are used. Without abstraction, the search space the model checker needs to traverse tends to grow quickly beyond what can be checked within reasonable time. The downside of this need to omit details is that it increases the distance to the implementation. Ideally, the created specifications could be used to generate software (either manually or automatically), but an incomplete description of the desired system is not enough for this purpose. In this work we introduce the Rebel2 specification language. Rebel2 lets the user write full system specifications in the form of state machines with data, without the need to apply abstraction, while still preserving the ability to verify non-trivial properties. This is done by allowing the user to forget and mock specifications when running the model checker; the original specifications are untouched by these techniques. We compare the expressiveness of Rebel2 and the effectiveness of mock and forget by implementing two case studies: one from the automotive domain and one from the banking domain. We find that Rebel2 is expressive enough to implement both case studies in a concise manner. In addition, when performing checks in isolation, mocking can speed up model checking significantly.
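
    Rebel2's mock and forget constructs are not reproduced in the abstract. The sketch below only gestures at the idea of mocking in plain SMT terms: when checking a property of one component, a dependency's full specification is replaced by a looser constraint over its interface, which shrinks the search space while the original specification stays untouched. The property, the interface variables, and the mock constraint are invented for this example.

```python
# Hedged illustration of "mocking" during model checking: instead of
# unfolding the full specification of a dependency (here: a fee
# calculation), constrain only its interface and check the property
# against that abstraction.
from z3 import Int, Solver, sat

amount = Int("amount")   # amount requested by the caller
fee    = Int("fee")      # output of the mocked fee calculation
total  = Int("total")    # amount actually charged

s = Solver()
s.add(amount > 0)
s.add(fee >= 0, fee <= amount)   # mocked dependency: interface bounds only (invented)
s.add(total == amount + fee)     # behaviour of the component under check
s.add(total < amount)            # negation of the property 'total >= amount'

if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("property holds for every behaviour the mock admits")
```

    The check is unsatisfiable, so the property holds for every behaviour the mock admits; provided the real fee calculation respects the assumed interface bounds, the result carries over to the full specification.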

    Towards quantitative analysis of coronary CTA

    The current high spatial and temporal resolution, multi-slice imaging capability, and ECG-gated reconstruction of multi-slice computed tomography (MSCT) allow the non-invasive 3D imaging of opacified coronary arteries. MSCT coronary angiography studies are currently carried out by visual inspection of the degree of stenosis, and it has been shown that assessments with sensitivities and specificities of 90% and higher can be achieved. To increase the reproducibility of the analysis, we present a method that performs quantitative analysis of coronary artery disease with limited user interaction: only the positioning of one or two seed points is required. The method allows the segmentation of the entire left or right coronary tree by the positioning of a single seed point, and an extensive evaluation of a particular vessel segment by placing a proximal and a distal seed point. The presented method consists of: (1) the segmentation of the coronary vessels, (2) the extraction of the vessel centerline, (3) the reformatting of the image volume, (4) a combination of longitudinal and transversal contour detection, and (5) the quantification of vessel morphological parameters. The method is illustrated in this paper by the segmentation of the left and right coronary trees and by the analysis of a coronary artery segment. The sensitivity to the positioning of the seed points is studied by varying the position of the proximal and distal seed points with a standard deviation of 6 and 8 mm (along the vessel's course), respectively. It is shown that the vessel centerlines deviate only close to the individual seed points and that for more than 80% of the centerlines the paths coincide. Since the quantification depends on the determination of the centerline, no user variability is expected as long as the seed points are positioned reasonably far away from the vessel lesion. The major bottleneck of MSCT imaging of the coronary arteries is the potential lack of image quality due to limitations in spatial and temporal resolution, irregular or high heart rate, respiratory effects, and variations in the distribution of the contrast agent: the number of rejected vessel segments in diagnostic studies is currently still too high for implementation in routine clinical practice. High image quality is also required for the automated quantitative analysis of the coronary arteries. However, based upon the trend in technological development of MSCT scanners, there is no doubt that the quantitative analysis of MSCT coronary angiography will benefit from these technological advances in the near future.

    CT blurring induced bias of quantitative in-stent restenosis analyses

    Rationale and Objective: In CT systems, blurring is the main limiting factor for imaging in-stent restenosis. The aim of this study is to systematically analyze the effect of blurring-related biases on the quantitative assessment of in-stent restenosis and to evaluate potential correction methods. Methods: 3D analytical models of a blurred, stented vessel are presented to quantify blurring-related artifacts in the stent diameter measurement. Two correction methods are presented for an improved stent diameter measurement. We also examine the suitability of deconvolution techniques for correcting blurring artifacts. Results: Blurring results in a shift of the maximum of the signal intensity towards the center position of the stent, resulting in an underestimation of the stent diameter. This shift can be expressed as a function of the stent radius and the width of the point spread function. The correction for this phenomenon reduces the error by 75 percent. Deconvolution reduces the blurring artifacts but introduces a ringing artifact. Conclusion: The analytical vessel models are well suited to study the influence of various parameters on blurring-induced artifacts. The blurring-related underestimation of the stent diameter can be significantly reduced using the presented corrections. Care should be taken in choosing suitable deconvolution filters, since they may introduce new artifacts.
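
    The paper's 3D analytical models are not reproduced in the abstract. The 1D toy model below (all numbers are assumptions chosen for illustration) shows the effect being described: blurring two thin stent struts with a Gaussian point spread function pulls the intensity maxima towards the centre, so a peak-to-peak measurement underestimates the true diameter.

```python
# Hedged 1D toy model (not the paper's 3D analytical model): two thin
# stent struts blurred by a Gaussian point spread function.  The
# intensity maxima shift towards the centre, so a peak-to-peak
# "diameter" measurement underestimates the true strut separation.
import numpy as np
from scipy.ndimage import gaussian_filter1d

dx     = 0.05                     # sample spacing in mm (assumed)
x      = np.arange(-10, 10, dx)   # cross-sectional axis in mm
radius = 1.5                      # true strut positions at +/- 1.5 mm (assumed)
sigma  = 1.2                      # PSF width in mm (assumed)

profile = np.zeros_like(x)
profile[np.argmin(np.abs(x - radius))] = 1.0   # right strut
profile[np.argmin(np.abs(x + radius))] = 1.0   # left strut

blurred = gaussian_filter1d(profile, sigma / dx)   # sigma converted to samples

left  = x[x < 0][np.argmax(blurred[x < 0])]
right = x[x >= 0][np.argmax(blurred[x >= 0])]
print(f"true strut separation: {2 * radius:.2f} mm")
print(f"peak-to-peak estimate: {right - left:.2f} mm")
```

    With these assumed values the estimated separation comes out a few tenths of a millimetre below the true 3.0 mm, and the underestimation grows as the PSF width increases relative to the stent radius, which is the bias the paper's corrections target.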